28 research outputs found

    Underreported in-water behaviours of the loggerhead sea turtle: Getting buried in the sand


    Bilevel Training Schemes in Imaging for Total Variation-Type Functionals with Convex Integrands

    In the context of image processing, given a k-th order, homogeneous and linear differential operator with constant coefficients, we study a class of variational problems whose regularizing terms depend on the operator. Precisely, the regularizers are integrals of spatially inhomogeneous integrands with convex dependence on the differential operator applied to the image function. The setting is made rigorous by means of the theory of Radon measures and of suitable function spaces modeled on BV. We prove the lower semicontinuity of the functionals at stake and the existence of minimizers for the corresponding variational problems. Then, we embed the latter into a bilevel scheme in order to automatically compute the space-dependent regularization parameters, thus allowing for good flexibility and preservation of details in the reconstructed image. We establish existence of optima for the scheme and we finally substantiate its feasibility by numerical examples in image denoising. The cases that we treat are Huber versions of the first and second order total variation with both the Huber and the regularization parameter being spatially dependent. Notably, the spatially dependent version of second order total variation produces high quality reconstructions when compared to regularizations of similar type, and the introduction of the spatially dependent Huber parameter leads to a further enhancement of the image details. Comment: 27 pages, 6 figures
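    As a rough illustration of the kind of functional treated here (not the authors' implementation), the following NumPy sketch evaluates a discrete first-order Huber-TV energy with a per-pixel regularization weight and a per-pixel Huber parameter; the function names, discretisation and Huber convention are assumptions made for the example.

        import numpy as np

        def huber(t, gamma):
            # Huber function: quadratic near zero, linear in the tail
            # (one common convention; the paper may scale it differently)
            return np.where(t <= gamma, t**2 / (2.0 * gamma), t - gamma / 2.0)

        def huber_tv_energy(u, f, alpha, gamma):
            # E(u) = 0.5*||u - f||^2 + sum_i alpha_i * huber(|grad u|_i, gamma_i)
            # forward differences with Neumann boundary conditions
            dx = np.diff(u, axis=1, append=u[:, -1:])
            dy = np.diff(u, axis=0, append=u[-1:, :])
            grad_norm = np.sqrt(dx**2 + dy**2)
            return 0.5 * np.sum((u - f)**2) + np.sum(alpha * huber(grad_norm, gamma))

        # toy usage with constant parameter maps
        f = np.random.rand(64, 64)
        alpha = np.full_like(f, 0.1)   # spatially dependent regularization weight
        gamma = np.full_like(f, 0.01)  # spatially dependent Huber parameter
        print(huber_tv_energy(f, f, alpha, gamma))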

    Unrolled three-operator splitting for parameter-map learning in Low Dose X-ray CT reconstruction

    We propose a method for fast and automatic estimation of spatially dependent regularization maps for total variation-based (TV) tomography reconstruction. The estimation is based on two distinct sub-networks, with the first sub-network estimating the regularization parameter-map from the input data and the second unrolling T iterations of the Primal-Dual Three-Operator Splitting (PD3O) algorithm. The latter approximately solves the corresponding TV-minimization problem incorporating the previously estimated regularization parameter-map. The overall network is then trained end-to-end in a supervised learning fashion using pairs of clean and corrupted data, but crucially without the need for access to labels for the optimal regularization parameter-maps.
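    Purely to illustrate what the unrolled solver sub-network computes, here is a NumPy sketch of T PD3O iterations for a TV-denoising special case with a spatially varying regularization map; the update follows one common presentation of PD3O, the paper itself uses a CT forward operator and a learned parameter-map, and all names and step-size choices are assumptions.

        import numpy as np

        def grad(u):
            # forward differences with periodic boundary (keeps grad/div an exact adjoint pair)
            return np.stack([np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u])

        def div(p):
            # discrete divergence, the negative adjoint of grad above
            px, py = p
            return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

        def pd3o_tv_denoise(y, lam_map, T=50, tau=1.0):
            # min_x 0.5*||x - y||^2 + sum_i lam_map_i * |(grad x)_i|
            # f(x) = 0.5*||x - y||^2 (smooth), g = 0, h = weighted isotropic l1 of grad x
            sigma = 1.0 / (tau * 8.0)          # tau * sigma * ||grad||^2 <= 1
            x = y.copy()
            s = np.zeros((2,) + y.shape)
            grad_f = lambda x: x - y           # gradient of the data fidelity
            for _ in range(T):
                x_new = x - tau * grad_f(x) + tau * div(s)   # prox of g = 0 is the identity
                bar = 2 * x_new - x + tau * grad_f(x) - tau * grad_f(x_new)
                s = s + sigma * grad(bar)
                norm = np.sqrt(s[0]**2 + s[1]**2)
                s = s / np.maximum(1.0, norm / np.maximum(lam_map, 1e-12))  # prox of h*
                x = x_new
            return x

        # toy usage: noisy image and a constant regularization map
        y = np.clip(np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64), 0, 1)
        lam_map = np.full_like(y, 0.1)
        x_rec = pd3o_tv_denoise(y, lam_map)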

    Learning Regularization Parameter-Maps for Variational Image Reconstruction Using Deep Neural Networks and Algorithm Unrolling

    We introduce a method for the fast estimation of data-adapted, spatially and temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. The proposed approach is inspired by recent developments in algorithm unrolling using deep neural networks (NNs) and relies on two distinct subnetworks. The first subnetwork estimates the regularization parameter-map from the input data. The second subnetwork unrolls T iterations of an iterative algorithm which approximately solves the corresponding TV-minimization problem incorporating the previously estimated regularization parameter-map. The overall network is then trained end-to-end in a supervised learning fashion using pairs of clean and corrupted data but crucially without the need for access to labels for the optimal regularization parameter-maps. We first prove consistency of the unrolled scheme by showing that the unrolled energy functional used for the supervised learning Γ-converges, as the number of unrolled iterations T tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. Then, we apply and evaluate the proposed method on a variety of large-scale and dynamic imaging problems with retrospectively simulated measurement data for which the automatic computation of such regularization parameters has been so far challenging using the state-of-the-art methods: a 2D dynamic cardiac magnetic resonance imaging (MRI) reconstruction problem, a quantitative brain MRI reconstruction problem, a low-dose computed tomography problem, and a dynamic image denoising problem. The proposed method consistently improves the TV reconstructions using scalar regularization parameters, and the obtained regularization parameter-maps adapt well to imaging problems and data by leading to the preservation of detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the subsequent reconstruction algorithm is interpretable since it inherits the properties (e.g., convergence guarantees) of the iterative reconstruction method from which the network is implicitly defined.
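    The two-subnetwork architecture and the label-free supervised loss can be sketched as follows in PyTorch; this is a hypothetical skeleton with invented class and function names, and the solver step below is a plain gradient step on a smoothed, spatially weighted TV energy standing in for the paper's unrolled iterative algorithm.

        import torch
        import torch.nn as nn

        def tv_solver_step(x, z, lam, step=0.1, eps=1e-3):
            # one gradient step on 0.5*||x - z||^2 + sum lam*sqrt(|grad x|^2 + eps^2)
            dx = torch.roll(x, -1, dims=-1) - x
            dy = torch.roll(x, -1, dims=-2) - x
            mag = torch.sqrt(dx**2 + dy**2 + eps**2)
            px, py = lam * dx / mag, lam * dy / mag
            div = (px - torch.roll(px, 1, dims=-1)) + (py - torch.roll(py, 1, dims=-2))
            return x - step * ((x - z) - div)

        class ParamMapNet(nn.Module):
            # subnetwork 1: small CNN mapping the corrupted input to a positive parameter-map
            def __init__(self, channels=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, 3, padding=1), nn.Softplus())
            def forward(self, z):
                return self.net(z)

        class UnrolledTV(nn.Module):
            # subnetwork 2: T unrolled, differentiable solver iterations using the estimated map
            def __init__(self, T=10):
                super().__init__()
                self.param_net = ParamMapNet()
                self.T = T
            def forward(self, z):
                lam = self.param_net(z)
                x = z.clone()
                for _ in range(self.T):
                    x = tv_solver_step(x, z, lam)
                return x

        # training uses (corrupted, clean) pairs only -- no parameter-map labels
        model = UnrolledTV(T=10)
        z, x_clean = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
        loss = ((model(z) - x_clean)**2).mean()
        loss.backward()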

    Total Directional Variation for Video Denoising

    In this paper, we propose a variational approach for video denoising, based on a total directional variation (TDV) regulariser proposed in Parisotto et al. (2018) for image denoising and interpolation. In the TDV regulariser, the underlying image structure is encoded by means of weighted derivatives so as to enhance the anisotropic structures in images, e.g. stripes or curves with a dominant local directionality. For the extension of TDV to video denoising, the space-time structure is captured by the volumetric structure tensor guiding the smoothing process. We discuss this and present our whole video denoising workflow. Our numerical results are compared with some state-of-the-art video denoising methods.
    SP acknowledges UK EPSRC grant EP/L016516/1 for the CCA DTC. CBS acknowledges support from the Leverhulme Trust project on Breaking the non-convexity barrier, EPSRC grant Nr. EP/M00483X/1, the EPSRC Centre EP/N014588/1, the RISE projects CHiPS and NoMADS, the CCIMI and the Alan Turing Institute.
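    The volumetric (space-time) structure tensor mentioned above is a standard construction; a NumPy/SciPy sketch is given below. Smoothing scales and the exact weighting used in the paper's TDV workflow may differ, and the function name is an assumption.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def volumetric_structure_tensor(video, sigma_grad=1.0, sigma_avg=2.0):
            # space-time structure tensor of a video of shape (T, H, W):
            # gradients of a Gaussian-smoothed volume, outer products averaged
            # with a second Gaussian, one 3x3 tensor per voxel
            v = gaussian_filter(video.astype(float), sigma_grad)
            g = np.stack(np.gradient(v), axis=-1)        # (T, H, W, 3): d/dt, d/dy, d/dx
            J = g[..., :, None] * g[..., None, :]        # outer products, (T, H, W, 3, 3)
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = gaussian_filter(J[..., i, j], sigma_avg)
            return J

        # the eigenvector of J with the smallest eigenvalue approximates the local
        # space-time orientation along which intensities vary least; TDV-style
        # regularisers use such directions to weight the derivatives
        video = np.random.rand(8, 32, 32)
        J = volumetric_structure_tensor(video)
        eigvals, eigvecs = np.linalg.eigh(J)             # per-voxel eigendecomposition
        dominant_direction = eigvecs[..., 0]             # smallest-eigenvalue eigenvector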

    A combined first and second order variational approach for image reconstruction

    In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well-known ROF functional (total variation minimisation) to which we add a non-smooth second order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring as well as image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution and Euler's elastica, three other state-of-the-art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky-like structures in the reconstructed images -- a known disadvantage of the ROF model -- while being simple and efficient to solve numerically. Comment: 34 pages, 89 figures
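    As a worked example of the kind of combined first/second order energy described above (not the paper's discretisation or its split Bregman solver), a NumPy sketch of one such discrete functional is given below; names, boundary handling and the specific choice of convex functions are assumptions.

        import numpy as np

        def tv_tv2_energy(u, f, alpha, beta):
            # E(u) = 0.5*||u - f||^2 + alpha*TV(u) + beta*TV2(u),
            # where TV2 is the total variation of the gradient
            # (the bounded-Hessian seminorm); periodic finite differences
            dxf = lambda a, ax: np.roll(a, -1, axis=ax) - a     # forward difference
            dxb = lambda a, ax: a - np.roll(a, 1, axis=ax)      # backward difference
            ux, uy = dxf(u, 1), dxf(u, 0)
            tv = np.sum(np.sqrt(ux**2 + uy**2))
            uxx, uyy = dxb(ux, 1), dxb(uy, 0)                   # Hessian entries
            uxy, uyx = dxb(ux, 0), dxb(uy, 1)
            tv2 = np.sum(np.sqrt(uxx**2 + uyy**2 + uxy**2 + uyx**2))
            return 0.5 * np.sum((u - f)**2) + alpha * tv + beta * tv2

        f = np.random.rand(64, 64)
        print(tv_tv2_energy(f, f, alpha=0.1, beta=0.05))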

    Bilevel Parameter Learning for Higher-Order Total Variation Regularisation Models.

    We consider a bilevel optimisation approach for parameter learning in higher-order total variation image reconstruction models. Apart from the least squares cost functional, naturally used in bilevel learning, we propose and analyse an alternative cost based on a Huber-regularised TV seminorm. Differentiability properties of the solution operator are verified and a first-order optimality system is derived. Based on the adjoint information, a combined quasi-Newton/semismooth Newton algorithm is proposed for the numerical solution of the bilevel problems. Numerical experiments are carried out to show the suitability of our approach and the improved performance of the new cost functional. Thanks to the bilevel optimisation framework, a detailed comparison between TGV² and ICTV is also carried out, showing the advantages and shortcomings of both regularisers, depending on the structure of the processed images and their noise level.
    King Abdullah University of Science and Technology (KAUST) (Grant ID: KUKI1-007-43), Engineering and Physical Sciences Research Council (Grant IDs: Nr. EP/J009539/1 “Sparse & Higher-order Image Restoration” and Nr. EP/M00483X/1 “Efficient computational tools for inverse imaging problems”), Escuela Politécnica Nacional de Quito (Grant ID: PIS 12-14, MATHAmSud project SOCDE “Sparse Optimal Control of Differential Equations”), Leverhulme Trust (project on “Breaking the non-convexity barrier”), SENESCYT (Ecuadorian Ministry of Higher Education, Science, Technology and Innovation) (Prometeo Fellowship).
    This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.1007/s10851-016-0662-
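    The bilevel structure itself (lower-level reconstruction, upper-level reconstruction-error cost) can be illustrated with a minimal sketch; here the lower level is a first-order Huber-TV denoiser solved by gradient descent and the parameter is selected by a crude grid search, a deliberate simplification standing in for the paper's higher-order regularisers and its quasi-Newton/semismooth Newton scheme. All names and numerical values are assumptions.

        import numpy as np

        def huber_tv_denoise(f, alpha, gamma=0.05, iters=300):
            # lower level: min_u 0.5*||u - f||^2 + alpha * sum huber_gamma(|grad u|),
            # solved by gradient descent (the Huber smoothing makes it differentiable)
            step = 1.0 / (1.0 + 8.0 * alpha / gamma)     # conservative step size
            u = f.copy()
            for _ in range(iters):
                ux = np.roll(u, -1, axis=1) - u
                uy = np.roll(u, -1, axis=0) - u
                mag = np.sqrt(ux**2 + uy**2)
                w = np.where(mag > gamma, 1.0 / np.maximum(mag, 1e-12), 1.0 / gamma)
                px, py = w * ux, w * uy
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u = u - step * ((u - f) - alpha * div)
            return u

        def upper_level_cost(alpha, f_noisy, u_true):
            # bilevel objective: reconstruction error of the lower-level solution
            return 0.5 * np.sum((huber_tv_denoise(f_noisy, alpha) - u_true)**2)

        # grid search stands in for the quasi-Newton/semismooth Newton outer solver
        u_true = np.zeros((32, 32)); u_true[8:24, 8:24] = 1.0
        f_noisy = u_true + 0.1 * np.random.randn(*u_true.shape)
        alphas = [0.02, 0.05, 0.1, 0.2, 0.4]
        best = min(alphas, key=lambda a: upper_level_cost(a, f_noisy, u_true))
        print("selected alpha:", best)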